Links April 2024

Ron Garret wrote an insightful refutation of 2nd amendment arguments [1].

Interesting article from the UK about British Gas losing a civil suit about bill collecting techniques that are harassment [2]. This should be a criminal offence investigated by the police and prosecuted by the CPS.

David Brin wrote a new version of his essay about dealing with blackmail in the US political system [3].

Cory Doctorow gave an insightful lecture about Enshittification for the Transmediale festival in Berlin [4]. This link has video and a transcript, I read the transcript.

The Cut has an insightful article by a journalist who gave $50k in cash to a scammer and compares the scam to techniques used to extort false confessions [5].

Truth Dig has an informative article about how Nick Bostrom is racist and how his advocacy of eugenics influences Effective Altruism and a lot of Silicon Valley [6].

Bruce Schneier and Nathan Sanders wrote an insightful article about the problems with a frontier slogan for AI development [7].

Brian Krebs wrote an informative article about the links between Chinese APT companies and the Chinese government [8].

USB PSUs

I just bought a new USB PSU from AliExpress [1]. I got this to reduce the clutter in my bedroom; I charge my laptop, PineTime, and a few phones at the same time, and a single PSU with lots of ports makes that easier. I also bought a couple of really short USB-C cables, as it’s been proven by both real life tests and mathematical modelling that shorter cables get tangled less. This power supply is based on Gallium Nitride (GaN) [2] technology, which makes it efficient and cool running.

One thing I only learned about after that purchase is the new USB PPS standard (see the USB Wikipedia page for details [3]). The PPS (Programmable Power Supply) standard allows (quoting Wikipedia) “a voltage range of 3.3 to 21 V in 20 mV steps, and a current specified in 50 mA steps, to facilitate constant-voltage and constant-current charging”. What this means in practice (when phones support it, which for me will probably be 2029 or something) is that the phone could receive power exactly matching the voltage needed for the battery and not have any voltage conversion inside the phone. Phones are designed to stop charging at a certain temperature; this probably doesn’t concern people in places like Northern Europe, but in Australia it can be an issue. Removing the heat dissipated by inefficiencies in the voltage conversion circuitry means the phone will be cooler when charging and can charge at a higher rate.

There is a “Certified USB Fast Charger” logo for chargers which do this, but it seems that at the moment they just include “PPS” in the feature list. So I highly recommend that GaN and PPS be on your feature list for your next USB PSU, but failing that the 240W PSU I bought for $36 was a good deal.

Galaxy Note 9 Droidian

Droidian Support for Note 9

Droidian only supported the version of this phone with the Exynos chipset. The GSM Arena specs page for the Note 9 shows that it’s the SM-N960F part number [1]. In Australia all Note 9 phones should have the Exynos but it doesn’t hurt to ask for the part number before buying.

The status of the Note9 in Droidian went from fully supported to totally unsupported in the time I was working on this blog post. Such a rapid change is disappointing, it would be good if they at least kept the old data online. It would also be good if they didn’t require a hash character in the URL for each phone which breaks the archive.org mirroring.

Installing Droidian

Firstly, Power+VolumeDown will reboot the phone in some situations where the Power button on its own won’t. The Note 9 hardware keys are:

  • Power – Right side
  • Volume up/down – long button top of the left side
  • Bixby – key for Samsung assistant that’s below the volume on the left

The Droidian install document for the Galaxy Note 9 (now deleted) is a bit confusing and unclear. Here is the install process that worked for me.

  1. The doc says to start by installing “Android 10 (Q) stock firmware”, but apparently a version of Android 10 that’s already on the phone will do for that.
  2. Download the recovery.img file and the “Droidian’s image” files from the Droidian page and extract the “Droidian’s image” zip.
  3. Connect your phone to your workstation by USB, preferably USB 3 because it will take a few minutes to transfer the image at USB 2 speed. Install the Debian package adb on the workstation.
  4. To “Unlock the bootloader” you can apparently use a PC and the Samsung software, but the unlock option in the Android settings gives the same result without proprietary software; here’s how to do it:
    1. Connect the phone to Wifi. Then in settings go to “Software update”, then click on “Download and install”. Refuse to install if it offers you a new version (the unlock menu item will never appear unless you do this, so you can’t unlock without Internet access).
    2. In settings go to “About phone”, then “Software information”, then tap on “Build number” repeatedly until “Developer mode” is enabled.
    3. In settings go to the new menu “Developer options” then turn on the “OEM unlocking” option, this does a factory reset of the phone.
  5. To flash the recovery.img you apparently use Odin on Windows. I used the heimdall-flash package on Debian. On your Linux workstation run the commands:
    adb reboot download
    heimdall flash --RECOVERY recovery.img

    Then press VOLUME-UP+BIXBY+POWER as soon as it reboots to get into the recovery image. If you don’t do it soon enough it will do a default Android boot, which will wipe the recovery.img you installed and also do a factory reset, which will disable “Developer mode”, so you will need to go back to step 4.

  6. If the above step works correctly you will have a RECOVERY menu where the main menu has options “Reboot system now”, “Apply update”, “Factory reset”, and “Advanced” in a large font. If you failed to install recovery.img then you will get a similar menu but with a tiny font; that is the Samsung recovery image, which won’t work, so reboot and try again.
  7. When at the main recovery menu select “Advanced” and then “Enter fastboot”. Note that this doesn’t run a different program or do anything obviously different, it just gives another menu – that’s OK, we want it at this menu.
  8. Run “./flash_all.sh” on your workstation.
  9. Then it should boot Droidian! This may take a bit of time.
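To put the workstation side of the above in one place, this is roughly the command sequence (a sketch only – package names are Debian’s, USB debugging has to be enabled in the developer options for adb to see the phone, and flash_all.sh comes from the extracted Droidian zip):

sudo apt install adb heimdall-flash   # tools used in the steps above
adb reboot download                   # reboot the phone into download mode
heimdall detect                       # optional check that heimdall sees the phone
heimdall flash --RECOVERY recovery.img
# boot into the recovery image with VOLUME-UP+BIXBY+POWER, select
# "Advanced" then "Enter fastboot", then run:
./flash_all.sh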

First Tests

Battery

The battery and its charge and discharge rates are very important to me, it’s what made the PinePhonePro and Librem5 unusable as daily driver phones.

After running for about 100 minutes, of which about 40 minutes were spent playing with various settings, the phone was at 89% battery. The output of “upower -d” isn’t very accurate as it reported power use ranging from 0W to 25W! But this does suggest that the phone might last for 400 minutes of real use that’s not CPU intensive, such as reading email, document editing, and web browsing. I don’t think that 6.5 hours of doing such things non-stop without access to a power supply or portable battery is something I’m ever going to do. When advertising the phone Samsung claimed 17 hours of video playback, which I don’t think I’m ever going to get – or want.

After running for 11 hours it was at 58% battery. Then after just over 21 hours of running it had 13% battery. Generally I don’t trust the upower output much, but the fact that it ran for over 21 hours shows that its battery life is much better than the PinePhonePro and the Librem5. During those 21 hours I had an ssh session open with the client set to send ssh keep-alive messages every minute, so it had to remain active. There is an option to suspend on Droidian but they recommend you don’t use it, and there is no need for the “caffeine mode” that you have on Mobian. For comparison, my previous tests suggested that when doing nothing a PinePhonePro might last for 30 hours on battery while the Librem5 might only last 10 hours [2]. This test with Droidian was done with the phone within my reach for much of that time and subject to my desire to fiddle with new technology – so it wasn’t just sleeping all the time.

When charging from the USB port on my PC it went from 13% to 27% charge in half an hour, and after just over an hour it claimed to be at 33%. It ended up taking just over 7 hours to fully charge from empty, which is not great but not too bad for a PC USB port. This is the same USB port that my Librem5 couldn’t charge from. Also the discharge:charge ratio of 21:7 is better than I could get from the PinePhonePro with Caffeine mode enabled.
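If anyone wants to repeat this sort of test, a simple way of logging the battery state is to loop over the upower output – a sketch only, adjust the interval and fields to taste:

while true; do
  date
  upower -d | grep -E 'state|percentage|energy-rate'
  sleep 300
done >> battery.log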

rndis0

The rndis0 interface used for IP over USB doesn’t work. Droidian bug #36 [3].

Other Hardware

The phone I bought for testing is the model with 6G of RAM and 128G of storage; it has a minor screen crack and significant screen burn-in. It’s a good test system for $109. The screen burn-in is very obvious when running the default Android setup, but when running the default Droidian GNOME setup set to the Dark theme (which is a significant power saving with an AMOLED screen) I can’t see it at all. Buying a cheap phone with screen burn-in is something I recommend.

The stylus doesn’t work, which isn’t listed on the Droidian web page. I’m not sure if I tested the stylus when the phone was running Android; I think I did.

D State Processes

I get a kernel panic early in the startup for unknown reasons and some D state kernel threads which may or may not be related to that. Droidian bug #37 [4].

Second Phone

The Phone

I ordered a second Note9 on eBay; it had been advertised at $240 for a month and the seller accepted my offer of $200. With postage that’s $215 for a Note9 in decent condition with 8G of RAM and 512G of storage. But Droidian dropped support for the Note9 before I got to install it. At the moment I’m not sure what I’ll do with this; maybe I’ll keep it on Android.

I also bought four phone cases for $16. I got spares because of the high price of postage relative to the case cost and the fact that they may be difficult to get in a few years.

The Tests

For the next phone my plan was to do more tests on Android before upgrading it to Debian. Here are the ones I can think of now, please suggest any others I should do.

  • Log output of “ps auxf” equivalent.
  • Make notes on what they are doing with SE Linux.
  • Test the stylus.
  • Test USB networking to my workstation and my laptop.
  • Make a copy of the dmesg output. Also look for D state processes and other signs of problems.
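Most of those can be captured over adb from a workstation; something like the following would be a reasonable starting point (a sketch only – dmesg usually needs root so that part may fail on stock Android):

adb shell ps -A > ps.txt            # process list
adb shell getenforce > selinux.txt  # SE Linux enforcement mode
adb shell id >> selinux.txt         # SE Linux context of the adb shell
adb shell dmesg > dmesg.txt         # probably needs a rooted device
adb shell ip addr > net.txt         # state of the USB networking interfaces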

Droidian and Security

When I tell technical people about Droidian a common reaction is “great you can get a cheap powerful phone and have better security than Android”. This is wrong in several ways. Firstly Android has quite decent security. Android runs most things in containers and uses SE Linux. Droidian has the Debian approach for most software (IE it all runs under the same UID without any special protections) and the developers have no plans to use SE Linux. I’ve previously blogged about options for Sandboxing for Debian phone use, my blog post is NOT a solution to the problem but an analysis of the different potential ways of going about solving it [5].

The next issue is that Droidian has no way to update the kernel and the installation instructions often advise downgrading Android (running a less secure kernel) before the installation. The Android Generic Kernel Image project [6] addresses this by allowing a separation between drivers supplied by the hardware vendor and the kernel image supplied by Google. This also permits running the hardware vendor’s drivers with a GKI kernel released by Google after the hardware vendor dropped security support. But this only applies to Android 11 and later, so Android 10 devices (like the Note 9 image for Droidian) miss out on this.

Kitty and Mpv

6 months ago I switched to Kitty for terminal emulation [1]. So far there’s only been one thing that I couldn’t effectively do with Kitty that I did with Konsole in the past, that is watching a music video in 1/4 of the screen while using the rest for terminals. I could set up multiple Kitty windows taking up the rest of the screen, but I wanted to keep using a single Kitty with multiple terminals and just have mpv go over one of them. Kitty supports its own graphics protocol so “mpv --vo=kitty” works, but it took 6 times the CPU power in my tests, which isn’t good for a laptop.

For X11 there’s a --ontop option for mpv that does what you expect, but that doesn’t work on Wayland. Not working is mostly Wayland’s fault, as there is a long tail of less commonly used graphical operations that work in X11 but aren’t yet implemented in Wayland. I have filed a Debian bug report about this; the mpv man page should note that it’s only going to work on X11 on Linux.
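For reference, on X11 something like the following gives the quarter-sized always-on-top mpv window I want (a sketch only – the file name is a placeholder and --autofit just limits the window to a quarter of the screen size):

mpv --ontop --autofit=25% video.webm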

I have discovered a solution to that: in the KDE settings there’s a “Window Rules” section, where I created an entry with “Window class” exactly matching “mpv” and then added a rule “Keep above other windows” set to “force” and “yes”.

After that I can just resize mpv to occlude just one terminal and keep using the rest. Also one noteworthy thing with this is that it makes mpv go on top of the KDE taskbar, which can be a feature.

Humane AI Pin

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I’ve ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn’t work nearly as well as I think it should have worked so I sent an email to the developers saying “sorry this product failed to work well so I can’t say anything good about it” and didn’t publish a review.

One of the first things that caught my attention in the review is the note that the AI Pin doesn’t connect to your phone. I think that everything should connect to everything else as a usability feature. For security we don’t want so much connecting, and it’s quite reasonable to turn off various connections at appropriate times; the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad.

The next noteworthy thing is the external battery, which also acts as a magnetic attachment from inside your shirt. So I guess it’s using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone; you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up, and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life, and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life, which seems to be partly due to wireless charging from the detachable battery and partly due to it being physically small. It seems the “phablet” form factor is the smallest viable personal computer at this time.

The review glosses over what could be regarded as the 2 worst issues of the device. It does everything via the cloud (where “the cloud” means “a computer owned by someone I probably shouldn’t trust”) and it records everything. It’s strange that it’s not getting the hate that Google Glass got.

The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I’d rather have a Bluetooth attached tablet or something for operations that can’t be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video, maybe technology isn’t yet adequate to implement this properly.

The first criticism of the device in the “review” part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. In his demonstration the question “who designed the Washington Monument” took 8 seconds before the device started answering it. I asked the Alpaca LLM the same question running on 4 cores of an E5-2696 and it took 10 seconds to start answering and then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn’t be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of an E5-2696v3.

How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015? Passmark says that Intel CPU can do 48GFlop with all 18 cores, so 4 cores can presumably do about 10GFlop, which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM. The Alpaca model I’m testing with takes 4.2G of RAM to run, which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for a LLM and still have as much RAM for other purposes as my main laptop had as of a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running a LLM at that speed then I think it would be great for a first version – we can always rely on newer and faster hardware becoming available.
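For anyone who wants to try a similar test, a quantised Alpaca model can be run locally with llama.cpp (not necessarily exactly what I used, and the model file name below is just an example); -t 4 restricts it to 4 threads to match the test above:

./main -m ggml-alpaca-7b-q4.bin -t 4 -p "Who designed the Washington Monument?"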

Marques notes that the cause of some of the problems is likely a desire to make it a separate powerful product in the future, and that if they gave it phone connectivity at the start they would have to remove that later on. I think that the real problem is that the profit motive is incompatible with good design. They want to have a product that’s stand-alone and justifies the purchase price plus subscription, and that means not making it a “phone accessory”, while I think that the best thing for the user is to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro, which has the same issue of trying to be a stand-alone computer but not being properly capable of it.

One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it’s not convenient to write it down there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone as a general purpose computing device with microphone can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time or when some other condition is met. Another possibility is to have a phone have a hotkey function that varies by what you are doing, EG if bushwalking have the hotkey be to take a photo or if on a flight have it be taking a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:

  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people

One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about the potential uses of the AI Pin while driving (as Marques stated an aim to do), but if it replaces the use of regular phones while driving it will make things less bad.

Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise, the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort.

Another interesting review of the pin is by Mrwhosetheboss [5]; one of his examples is asking the pin for advice about a chair when, without him knowing, the pin had selected a different chair in the room. He compares this to using Google’s apps on a phone and seeing which item the app has selected. He also said that he doesn’t want to make an order based on speech, he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say “if you need to save $20 on a flight then a $24/month subscription computing service isn’t for you”; I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhosetheboss makes is regarding secret SMS; you probably wouldn’t want to speak an SMS you are sending to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (IE not having a separate phone number and separate data store) is a desired feature.

The most insightful point Mrwhosetheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that’s developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software; there are lots of different ways that things could be done, but we need to meet the expectations of users if we want our software to be used by many people.

I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I’ve written a blog post about Convergence vs Transference [7].

Convergence vs Transference

I previously wrote a blog post titled Considering Convergence [1] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues.

Currently the expected use is that if you have web pages open on Chrome on Android it’s possible to instruct Chrome on the desktop to open the same page if both instances of Chrome are signed in to the same GMail account. It’s also possible to view the Chrome history with CTRL-H, select “tabs from other devices” and load things that were loaded on other devices some time ago. This is very minimal support for moving work between devices and I think we can do better.

Firstly, for web browsing the Chrome functionality is barely adequate. It requires having a heavyweight login process on all browsers that includes sharing stored passwords etc, which isn’t desirable. There are many cases where moving work is desired without sharing such things; one example is using a personal device to research something for work. Also the Chrome method of sending web pages is slow and unreliable, and the viewing history method gets all closed tabs, when the common case is “get the currently open tabs from one browser window” without the dozens of web pages that turned out not to be interesting and were closed. This could be done with browser plugins that allow functionality similar to KDE Connect for sending tabs, and also the option of emailing a list of URLs or a JSON file that could be processed by a browser plugin on the receiving end. I can send email between my home and work addresses faster than the Chrome share to another device function can send a URL.
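The receiving end of such a transfer doesn’t need to be complex. Given a plain text file with one URL per line (which a browser plugin, or even copy and paste, could produce) something like the following opens them all in the default browser – a sketch only, the file name is arbitrary:

while read -r url; do xdg-open "$url"; done < tabs.txt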

For documents we need a way of transferring files. One possibility is to go the Chromebook route and have it all stored on the web. This means that you rely on a web based document editing system and the FOSS versions are difficult to manage. Using Google Docs or Sharepoint for everything is not something I consider an acceptable option. Also for laptop use being able to run without Internet access is a good thing.

There are a range of distributed filesystems that have been used for various purposes. I don’t think any of them cater to the use case of having a phone/laptop and a desktop PC (or maybe multiple PCs) using the same files.

For a technical user it would be an option to have a script that connects to a peer system (IE another computer with the same accounts and access control decisions), rsyncs a directory of working files and the shell history, and then opens a shell with the HISTFILE variable, current directory, and optionally some user environment variables set to match. But this wouldn’t be the most convenient thing even for technical users.
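A minimal sketch of such a script, assuming the same account name and home directory layout on both systems and working ssh access between them (the script name and paths are hypothetical):

#!/bin/bash
# move-work: copy work in progress to a peer system and open a shell there
PEER=$1                    # eg "desktop" or "laptop"
WORKDIR=${2:-$HOME/work}   # directory of work in progress
rsync -a "$WORKDIR/" "$PEER:$WORKDIR/"
rsync -a "$HOME/.bash_history" "$PEER:.bash_history_transfer"
# open a shell on the peer in the same directory with the transferred history
ssh -t "$PEER" "cd $WORKDIR && HISTFILE=\$HOME/.bash_history_transfer bash -i"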

For programs that are integrated into the desktop environment it’s possible for them to be restarted on login if they were active when the user logged out. The session tracking for that has about 1/4 the functionality needed for requesting a list of open files from the application, closing the application, transferring the files, and opening it somewhere else. I think that this would be a good feature to add to the XDG setup.

The model of having programs and data attached to one computer or one network server that terminals of some sort connect to worked well when computers were big and expensive. But computers continue to get smaller and cheaper, so we need to think of a document based use of computers that allows things to be easily transferred as convenient. Convenience is important, so the hacks of rsync scripts that can work for technical users won’t work for most people.

Source Code With Emoji

The XKCD comic Code Quality [1] inspired me to test out emoji in source. I really should have done this years ago when that XKCD was first published.

The following code compiles in gcc and runs in the way that anyone who wants to write such code would want it to run. The hover text in the XKCD comic is correct. You could have a style guide for such programming, for example storing error messages in the doctor and nurse emoji.

#include <stdio.h>

int main()
{
  int 😇 = 1, 😈 = 2;
  printf("😇=%d, 😈=%d\n", 😇, 😈);
  return 0;
}

To get this to display correctly in Debian you need to install the fonts-noto-color-emoji package (used by the KDE emoji picker that runs when you press Windows-. among other things) and restart programs that use emoji. The Konsole terminal emulator will probably need its profile settings changed to work with this if you ran Konsole before installing fonts-noto-color-emoji. The Kitty terminal emulator works if you restart it after installing fonts-noto-color-emoji.
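For reference, here is roughly what’s needed to build and run the example on Debian (a sketch – the source file name is arbitrary and the output needs an emoji capable terminal as described above):

sudo apt install fonts-noto-color-emoji
gcc -o emoji emoji.c
./emoji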

This web page gives a list of HTML codes for emoji [2]. If I start writing real code with emoji variable names then I’ll have to update my source to HTML conversion script (which handles <>" and repeated spaces) to convert emoji.

I spent a couple of hours on this and I think it was worth it. I have filed several Debian bug reports about improvements needed for emoji related issues.

Ubuntu 24.04 and Bubblewrap

When using Bubblewrap (the bwrap command) to create a container in Ubuntu 24.04 you can expect to get one of the following error messages:

bwrap: loopback: Failed RTM_NEWADDR: Operation not permitted
bwrap: setting up uid map: Permission denied

This is due to Ubuntu developers deciding to use AppArmor to restrict the creation of user namespaces. Here is an Ubuntu blog post about it [1].

To resolve that you could upgrade to SE Linux, but the other option is to create a file named /etc/apparmor.d/bwrap with the following contents:

abi <abi/4.0>,
include <tunables/global>

profile bwrap /usr/bin/bwrap flags=(unconfined) {
  userns,

  # Site-specific additions and overrides. See local/README for details.
  include if exists <local/bwrap>
}

Then run “systemctl reload apparmor“.
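A quick way to verify that it worked is to create a trivial user namespace with bwrap; if the profile is in effect you should see uid 0 reported inside the sandbox instead of the errors above:

bwrap --ro-bind / / --unshare-user --uid 0 id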

Software Needed for Work

When I first started studying computer science setting up a programming project was easy: write source code files and a Makefile and that was it. IRC was the only IM system and email was the only other communications system that was used much. Writing Makefiles is difficult, but products like the Borland Turbo series of IDEs did all that for you, so you could just start typing code and press a function key to compile and run (F5 from memory).

Over the years the requirements and expectations of computer use have grown significantly. The typical office worker is now doing many more things with computers than serious programmers used to do. Running an IM system, an online document editing system, and a series of web apps is standard for companies nowadays. Developers have to do all that in addition to tools for version control, continuous integration, bug reporting, and feature tracking. The development process is also more complex with extra steps for reproducible builds, automated tests, and code coverage metrics for the tests. I wonder how many programmers who started in the 90s would have done something else if faced with Github as their introduction.

How much of this is good? Having the ability to send instant messages all around the world is great. Having dozens of different ways of doing so is awful. When a company uses multiple IM systems such as MS-Teams and Slack and forces some of its employees to use them both it’s getting ridiculous. Having different friend groups on different IM systems is anti-social networking. In the EU the Digital Markets Act [1] forces some degree of interoperability between different IM systems, and as it’s impossible to know who’s actually in the EU that will end up being world-wide.

In corporations document management often involves multiple ways of storing things, you have Google Docs, MS Office online, hosted Wikis like Confluence, and more. Large companies tend to use several such systems which means that people need to learn multiple systems to be able to work and they also need to know which systems are used by the various groups that they communicate with. Microsoft deserves some sort of award for the range of ways they have for managing documents, Sharepoint, OneDrive, Office Online, attachments to Teams rooms, and probably lots more.

During WW2 the predecessor to the CIA produced an excellent manual for simple sabotage [2]. If something like that was written today the section General Interference with Organisations and Production would surely have something about using as many incompatible programs and web sites as possible in the work flow. The proliferation of software required for work is a form of denial of service attack against corporations.

The efficiency of companies doesn’t really bother me. It sucks that companies are creating a demoralising workplace that is unpleasant for workers. But the upside is that the biggest companies are the ones doing the worst things and are also the most afflicted by these problems. It’s almost like the Bureau of Sabotage in some of Frank Herbert’s fiction [3].

The thing that concerns me is the effect of multiple standards on free software development. We have IRC, the most traditional IM support system, which is getting replaced by Matrix, but we also have some projects using Telegram, and Jabber hasn’t gone away. I’m sure there are others too. There are also multiple options for version control (although GitHub seems to dominate the market), forums, bug trackers, etc. Reporting bugs or getting support in free software often requires interacting with several of them. Developing free software usually involves dealing with the bug tracking and documentation systems of the distribution you use as well as those of the upstream developers of the software. If the problem you have is related to compatibility between two different pieces of free software then you can end up dealing with even more bug tracking systems.

There are real benefits to some of the newer programs to track bugs, write documentation, etc. There is also going to be a cost in changing, which gives an incentive for the older projects to keep using what has worked well enough for them in the past.

How can we improve things? Use only the latest tools? Prioritise ease of use? Aim more for the entry level contributors?

ML Training License

Last year a Debian Developer blogged about writing Haskell code to give a bad result for LLMs that were trained on it. I forgot who wrote the post and I’d appreciate the URL if anyone has it.

I respect such technical work to enforce one’s legal rights when they aren’t respected by corporations, but I have a different approach.

As an aside the Fosdem lecture “Fortify AI against regulation, litigation and lobotomies” is interesting on this topic [1], it’s what inspired me to write about this.

For what I write I am at this time happy to allow it to be used as part of a large training data set (consider this blog post a licence grant that applies until such time as I edit this post to change it). But only if aggregated with so much other data that my content is only a tiny portion of the data set by any metric. So I don’t want someone to make a programming LLM that has my code as the only C code or a political data set that has my blog posts as the only left-wing content. If someone wants to train an LLM on only my content to make a Russell-simulator then I don’t license my work for that purpose but also as it’s small enough that anyone with a bit of skill could do it on a weekend I can’t stop it. I would be really interested in seeing the results if someone from the FOSS community wanted to make a Russell-simulator and would probably issue them a license for such work if asked.

If my work comprises more than 0.1% of the content in a particular measure (theme, programming language, political position, etc) in a training data set then I don’t permit that without prior discussion.

Finally if someone wants to make a FOSS training data set to be used for FOSS LLM systems (maybe under the AGPL or some similar license) then I’ll allow my writing to be used as part of that.